Facial Keypoints Detection¶

image.png

  • The marking of important areas of the face, such as the eyes, corners of the mouth, and nose, that are relevant for a variety of tasks, such as face filters, emotion recognition, and pose recognition, using convolutional neural network and computer vision techniques to perform facial Keypoint detection.

  • It entails forecasting the Facial Key points coordinates for a particular face, such as nose tip, the center of eyes, and so on. To recognize facial key points, we use a Convolutional Neural Network-based model.

  • Convolutional Neural Networks (CNN) has a deep structure that allows them to extract high-level information and provide better precision when identifying each important point. Convolutional networks are designed to anticipate all points at the same time.

Import Required Libraries¶

In [1]:
import os
import cv2
import time
import math
import keras
import tensorflow
from PIL import Image
import pandas as pd 
import numpy as np
import missingno as msno
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.utils import shuffle
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, GlobalAveragePooling2D, Activation
from keras.layers import Flatten, Dense
from keras.layers import normalization
from tensorflow.keras.layers import BatchNormalization
from keras import optimizers
from keras.callbacks import ModelCheckpoint, History

Importing train & test dataset¶

In [2]:
train = pd.read_csv('training.csv', header=0)
test = pd.read_csv('test.csv', header=0)
X_test, y_test = test[:-1], test["Image"]
In [3]:
len(train)
Out[3]:
7049
In [4]:
train.T.head()
Out[4]:
0 1 2 3 4 5 6 7 8 9 ... 7039 7040 7041 7042 7043 7044 7045 7046 7047 7048
left_eye_center_x 66.033564 64.332936 65.057053 65.225739 66.725301 69.680748 64.131866 67.468893 65.80288 64.121231 ... 69.229935 63.352951 65.711151 67.929319 66.867222 67.402546 66.1344 66.690732 70.965082 66.938311
left_eye_center_y 39.002274 34.970077 34.909642 37.261774 39.621261 39.968748 34.29004 39.413452 34.7552 36.740308 ... 38.575634 35.671311 38.843545 35.846552 37.356855 31.842551 38.365501 36.845221 39.853666 43.42451
right_eye_center_x 30.227008 29.949277 30.903789 32.023096 32.24481 29.183551 29.578953 29.355961 27.47584 29.468923 ... 29.407912 33.952078 32.268751 28.68782 30.750926 29.746749 30.478626 31.66642 30.543285 31.096059
right_eye_center_y 36.421678 33.448715 34.909642 37.261774 38.042032 37.563364 33.13804 39.621717 36.1856 38.390154 ... 38.34545 40.816448 37.706043 41.452484 40.115743 38.632942 39.950198 39.685042 40.772339 39.528604
left_eye_inner_corner_x 59.582075 58.85617 59.412 60.003339 58.56589 62.864299 57.797154 59.554951 58.65216 58.620923 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN

5 rows × 7049 columns

Check for null values¶

In [5]:
print(f'Feature \t\t\t Missing \t Percentage missing\n')
for k,v in train.isna().sum().items():
    print(f'{k !s:30} :{v :8} \t {round(v/len(train),2)}%')
Feature 			 Missing 	 Percentage missing

left_eye_center_x              :      10 	 0.0%
left_eye_center_y              :      10 	 0.0%
right_eye_center_x             :      13 	 0.0%
right_eye_center_y             :      13 	 0.0%
left_eye_inner_corner_x        :    4778 	 0.68%
left_eye_inner_corner_y        :    4778 	 0.68%
left_eye_outer_corner_x        :    4782 	 0.68%
left_eye_outer_corner_y        :    4782 	 0.68%
right_eye_inner_corner_x       :    4781 	 0.68%
right_eye_inner_corner_y       :    4781 	 0.68%
right_eye_outer_corner_x       :    4781 	 0.68%
right_eye_outer_corner_y       :    4781 	 0.68%
left_eyebrow_inner_end_x       :    4779 	 0.68%
left_eyebrow_inner_end_y       :    4779 	 0.68%
left_eyebrow_outer_end_x       :    4824 	 0.68%
left_eyebrow_outer_end_y       :    4824 	 0.68%
right_eyebrow_inner_end_x      :    4779 	 0.68%
right_eyebrow_inner_end_y      :    4779 	 0.68%
right_eyebrow_outer_end_x      :    4813 	 0.68%
right_eyebrow_outer_end_y      :    4813 	 0.68%
nose_tip_x                     :       0 	 0.0%
nose_tip_y                     :       0 	 0.0%
mouth_left_corner_x            :    4780 	 0.68%
mouth_left_corner_y            :    4780 	 0.68%
mouth_right_corner_x           :    4779 	 0.68%
mouth_right_corner_y           :    4779 	 0.68%
mouth_center_top_lip_x         :    4774 	 0.68%
mouth_center_top_lip_y         :    4774 	 0.68%
mouth_center_bottom_lip_x      :      33 	 0.0%
mouth_center_bottom_lip_y      :      33 	 0.0%
Image                          :       0 	 0.0%

Train data¶

In [6]:
import missingno as msno
msno.bar(train,color=(1, 0, 0))
Out[6]:
<Axes: >

Test data¶

In [7]:
import missingno as msno
msno.bar(test,color=(0.98, 0.28, 0.39))
Out[7]:
<Axes: >
In [8]:
# to remove rows with missing values
train.dropna(inplace = True)
In [9]:
# sum of all missing values
train.isnull().sum().sum()
Out[9]:
0
In [10]:
# to display summary of dataset structure and content
train.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2140 entries, 0 to 2283
Data columns (total 31 columns):
 #   Column                     Non-Null Count  Dtype  
---  ------                     --------------  -----  
 0   left_eye_center_x          2140 non-null   float64
 1   left_eye_center_y          2140 non-null   float64
 2   right_eye_center_x         2140 non-null   float64
 3   right_eye_center_y         2140 non-null   float64
 4   left_eye_inner_corner_x    2140 non-null   float64
 5   left_eye_inner_corner_y    2140 non-null   float64
 6   left_eye_outer_corner_x    2140 non-null   float64
 7   left_eye_outer_corner_y    2140 non-null   float64
 8   right_eye_inner_corner_x   2140 non-null   float64
 9   right_eye_inner_corner_y   2140 non-null   float64
 10  right_eye_outer_corner_x   2140 non-null   float64
 11  right_eye_outer_corner_y   2140 non-null   float64
 12  left_eyebrow_inner_end_x   2140 non-null   float64
 13  left_eyebrow_inner_end_y   2140 non-null   float64
 14  left_eyebrow_outer_end_x   2140 non-null   float64
 15  left_eyebrow_outer_end_y   2140 non-null   float64
 16  right_eyebrow_inner_end_x  2140 non-null   float64
 17  right_eyebrow_inner_end_y  2140 non-null   float64
 18  right_eyebrow_outer_end_x  2140 non-null   float64
 19  right_eyebrow_outer_end_y  2140 non-null   float64
 20  nose_tip_x                 2140 non-null   float64
 21  nose_tip_y                 2140 non-null   float64
 22  mouth_left_corner_x        2140 non-null   float64
 23  mouth_left_corner_y        2140 non-null   float64
 24  mouth_right_corner_x       2140 non-null   float64
 25  mouth_right_corner_y       2140 non-null   float64
 26  mouth_center_top_lip_x     2140 non-null   float64
 27  mouth_center_top_lip_y     2140 non-null   float64
 28  mouth_center_bottom_lip_x  2140 non-null   float64
 29  mouth_center_bottom_lip_y  2140 non-null   float64
 30  Image                      2140 non-null   object 
dtypes: float64(30), object(1)
memory usage: 535.0+ KB
In [11]:
train['Image'].head()
Out[11]:
0    238 236 237 238 240 240 239 241 241 243 240 23...
1    219 215 204 196 204 211 212 200 180 168 178 19...
2    144 142 159 180 188 188 184 180 167 132 84 59 ...
3    193 192 193 194 194 194 193 192 168 111 50 12 ...
4    147 148 160 196 215 214 216 217 219 220 206 18...
Name: Image, dtype: object
In [12]:
len(train)
Out[12]:
2140
In [13]:
# to set a new index for the "train" dataframe
keys = np.arange(0, 2140)
train = train.set_index(keys)
train.T.head(5)
Out[13]:
0 1 2 3 4 5 6 7 8 9 ... 2130 2131 2132 2133 2134 2135 2136 2137 2138 2139
left_eye_center_x 66.033564 64.332936 65.057053 65.225739 66.725301 69.680748 64.131866 67.468893 65.80288 64.121231 ... 66.827593 64.126581 63.738273 64.644616 64.383649 67.180378 65.72449 68.430866 64.15218 66.683755
left_eye_center_y 39.002274 34.970077 34.909642 37.261774 39.621261 39.968748 34.29004 39.413452 34.7552 36.740308 ... 30.620361 33.096095 34.407682 34.280084 35.104561 35.816373 36.30102 38.651975 30.691592 34.483429
right_eye_center_x 30.227008 29.949277 30.903789 32.023096 32.24481 29.183551 29.578953 29.355961 27.47584 29.468923 ... 25.111875 25.716503 26.854206 28.284307 30.424912 33.239956 25.377551 28.895857 27.000898 30.78449
right_eye_center_y 36.421678 33.448715 34.909642 37.261774 38.042032 37.563364 33.13804 39.621717 36.1856 38.390154 ... 33.298484 38.118015 39.145823 38.586911 33.399298 34.921932 37.311224 37.617027 40.868082 38.578939
left_eye_inner_corner_x 59.582075 58.85617 59.412 60.003339 58.56589 62.864299 57.797154 59.554951 58.65216 58.620923 ... 58.383626 57.887226 56.297606 57.75651 57.814386 59.347973 58.530612 61.65935 56.505624 59.255347

5 rows × 2140 columns

In [14]:
type(train)
Out[14]:
pandas.core.frame.DataFrame
In [15]:
# list of lists(contains pixel values for images)
imag = []  
for i in range(len(train)):
    if i==210 or i==350 or i==499 or i==512 or i==810 or i==839 or i==895 or i==1058 or i==1194:
        continue
    img = train['Image'][i].split(' ')
    img = [pixel if pixel != '' else '0' for pixel in img]
   
    imag.append(img)
In [16]:
image_list = np.array(imag, dtype = 'float')
X_train = image_list.reshape(-1, 96, 96, 1)
targets = np.array(train.iloc[:, :-1])
In [17]:
print('Shape of images',X_train.shape)
print('Shape of targets',targets.shape)
Shape of images (2131, 96, 96, 1)
Shape of targets (2140, 30)

Data Visualization¶

In [18]:
#  Visualize
viz = np.array([train['Image'][i].split(' ') for i in range(len(train))],dtype='float')
keys = train.drop(['Image'], axis=1)
In [19]:
# Gallery using Matplotlib 
fig, ax = plt.subplots(5, 5, figsize = (12,12), dpi = 100)
axes = ax.ravel()

for idx,ax  in enumerate(axes):
    ax.imshow(viz[idx].reshape(96, 96, 1))
    photo_visualize_pnts = keys.iloc[idx].values
fig.show()
C:\Users\PRITI CHAUDHARY\AppData\Local\Temp\ipykernel_3592\894729145.py:8: UserWarning: Matplotlib is currently using module://matplotlib_inline.backend_inline, which is a non-GUI backend, so cannot show the figure.
  fig.show()
In [20]:
# Gallery with keypoints marking 
fig, ax = plt.subplots(5, 5, figsize = (12,12), dpi = 100)
axes = ax.ravel()

for idx,ax  in enumerate(axes):
    ax.imshow(viz[idx].reshape(96, 96, 1))
    photo_visualize_pnts = keys.iloc[idx].values
    ax.scatter(photo_visualize_pnts[0::2], photo_visualize_pnts[1::2], c='Red', marker='*')
fig.show()
C:\Users\PRITI CHAUDHARY\AppData\Local\Temp\ipykernel_3592\1352036133.py:9: UserWarning: Matplotlib is currently using module://matplotlib_inline.backend_inline, which is a non-GUI backend, so cannot show the figure.
  fig.show()

Augmentation¶

We can add different augmenations like -

  • flipping
  • rotation
  • cropping
  • adding noise
  • bluring
  • brightness

etc..

In [21]:
# lets create an util to display the augumentation
def display_augmentation(img, feat, img_f, feat_f):
    plt.figure(figsize=(8,8))
    plt.subplot(2,2,1)
    plt.scatter(feat[0::2],-feat[1::2],c='r',marker='x')
    plt.subplot(2,2,2)
    plt.scatter(feat_f[0::2],-feat_f[1::2],c='r',marker='x')
    plt.subplot(2,2,3)
    display_images(img, feat)
    plt.subplot(2,2,4)
    display_images(img_f, feat_f)

Flipping¶

In [22]:
# an util that give flipped images and targets
def flipping_augmentation(images, features):
    flipped_images = np.flip(images, axis=2)
    
    flipped_features = features.copy()
    for i, feat in enumerate(flipped_features):
        for j, val in enumerate(feat):
            if j%2==0:
                flipped_features[i][j] = 96-val
            
    return flipped_images, flipped_features

# to create an object to keep track of the augmentations
augmentation_functions = {
    'flip' : flipping_augmentation
}

Cropping¶

In [23]:
# lets make an util that give us cropped images and targets
def crop_augmentation(images, targets):
    cropped_images = images.copy()

    for i in range(len(images)):
        cropped_images[i,:,:10] = 0
        cropped_images[i,:,86:] = 0
        cropped_images[i,:10,:] = 0
        cropped_images[i,86:,:] = 0

    return cropped_images, targets


augmentation_functions['crop']=crop_augmentation
    

Brightness¶

In [24]:
# to create an util that adds brightness
def brightness_augmentation(images, features, factor=1.5):
    bright = []
    for img in images:
        bright.append(np.clip(img*factor, 0, 255))
    return np.array(bright), features

augmentation_functions['brightness'] = brightness_augmentation

Adding noise¶

In [25]:
# to create an utility that adds noise to the image
def noise_augmentation(images, features, factor):
    augmented = []
    noise = np.random.randint(low=0, high=255, size=images.shape[1:])
    for img in images:
        img = img + (noise*factor)
        augmented.append(img)
    
    return np.array(augmented), features

augmentation_functions['noise'] = noise_augmentation

Lets prepare our training data¶

In [26]:
print('Shape of image data', X_train.shape)
print('Shape of target data', targets.shape)
Shape of image data (2131, 96, 96, 1)
Shape of target data (2140, 30)
In [27]:
# ADDING AUGMENTATION

def augmentation(img, feat , method):
    aug_img, aug_feat = method
    img = np.concatenate([img,aug_img])
    feat = np.concatenate([feat,aug_feat])
    return img, feat


# flip
method = flipping_augmentation(X_train, targets)
augmented_images, augmented_targets = augmentation(X_train, targets, method)
print('image augmentation : flipping')

# crop
method = crop_augmentation(X_train, targets)
augmented_images, augmented_targets = augmentation(augmented_images, augmented_targets, method)
print('image augmentation : cropping')

# brightness
method = brightness_augmentation(X_train, targets, factor=2.0)
augmented_images, augmented_targets = augmentation(augmented_images, augmented_targets, method)
print('image augmentation : adding brightness')


# noise
method = noise_augmentation(X_train, targets, factor=0.2)
augmented_images, augmented_targets = augmentation(augmented_images, augmented_targets, method)
print('image augmentation : adding noise')
image augmentation : flipping
image augmentation : cropping
image augmentation : adding brightness
image augmentation : adding noise
In [28]:
print('Shape of data after augmentation')
print('Shape of image data',augmented_images.shape)
print('Shape of target data', augmented_targets.shape)
Shape of data after augmentation
Shape of image data (10655, 96, 96, 1)
Shape of target data (10700, 30)
In [29]:
# to create an util that will display the images
def display_images(img, feat):
    plt.imshow(img, cmap=plt.cm.gray);
    plt.scatter(feat[0::2], feat[1::2], c='r', marker='x')
    
In [30]:
# to check our data one last time before we start building models

def visualize_data(images, targets):
    plt.figure(figsize=(12,12))
    for i in range(10):
        idx = np.random.randint(images.shape[0])
        plt.subplot(2,5,i+1)
        display_images(images[idx], targets[idx])
        plt.axis('off')
    plt.subplots_adjust(bottom=0.5)
    plt.show()

visualize_data(augmented_images, augmented_targets)

Modeling¶

In [31]:
pretrained_model = tf.keras.applications.mobilenet_v2.MobileNetV2(input_shape=(96,96,3),include_top=False,weights='imagenet')
In [32]:
%%time
augmented_images = tf.keras.applications.mobilenet_v2.preprocess_input(augmented_images)
CPU times: total: 250 ms
Wall time: 236 ms
In [33]:
%%time

# to check our data to make sure everything is fine
visualize_data(augmented_images,augmented_targets)
CPU times: total: 609 ms
Wall time: 571 ms
In [34]:
# to create TensorFlow datasets from NumPy arrays to efficiently feed data to our model
images_ds = tf.data.Dataset.from_tensor_slices(augmented_images)
targets_ds = tf.data.Dataset.from_tensor_slices(augmented_targets)

ds = tf.data.Dataset.zip((images_ds, targets_ds))
ds = ds.shuffle(buffer_size=augmented_targets.shape[0])
ds = ds.batch(64)
ds = ds.prefetch(tf.data.AUTOTUNE)
In [35]:
# to split a TensorFlow dataset "ds" into two subsets for training and validation
train_ds = ds.skip(10).shuffle(100)
val_ds = ds.take(10)

FFNN¶

In [36]:
# to create a preprocessing layer
class ImageTile(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__(trainable = False)
        
    def call(self, inputs):
        return tf.tile(inputs,tf.constant([1,1,1,3]))
In [37]:
def FFNN(units):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(units),  
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Activation('relu'),
    ])
In [38]:
block = []
for units in [512,256,128,64]:
    block.append(FFNN(units))
In [39]:
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96,96,1)),
    ImageTile(),
    pretrained_model,
    tf.keras.layers.GlobalMaxPooling2D(),
    *block,
    tf.keras.layers.Dense(30)])
In [40]:
model.layers[1].trainable=False
In [41]:
model.compile(optimizer='adam',
              loss=tf.keras.losses.MeanSquaredError(),
              metrics=['accuracy', 'mae', 'mse'])
In [42]:
model.summary()
Model: "sequential_4"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 image_tile (ImageTile)      (None, 96, 96, 3)         0         
                                                                 
 mobilenetv2_1.00_96 (Functi  (None, 3, 3, 1280)       2257984   
 onal)                                                           
                                                                 
 global_max_pooling2d (Globa  (None, 1280)             0         
 lMaxPooling2D)                                                  
                                                                 
 sequential (Sequential)     (None, 512)               657920    
                                                                 
 sequential_1 (Sequential)   (None, 256)               132352    
                                                                 
 sequential_2 (Sequential)   (None, 128)               33408     
                                                                 
 sequential_3 (Sequential)   (None, 64)                8512      
                                                                 
 dense_4 (Dense)             (None, 30)                1950      
                                                                 
=================================================================
Total params: 3,092,126
Trainable params: 832,222
Non-trainable params: 2,259,904
_________________________________________________________________
In [43]:
# decaying learing rate
def decay_lr(epoch):
  return 0.01*math.pow(0.77,epoch)

lr_schedule = tf.keras.callbacks.LearningRateScheduler(decay_lr)
lr_on_plateau = tf.keras.callbacks.ReduceLROnPlateau(patience=2)
early_stopping = tf.keras.callbacks.EarlyStopping(
                        monitor='val_loss',
                        patience=3,
                        restore_best_weights=True)

Fit / Train the model¶

In [44]:
history = model.fit(train_ds, epochs=20, 
                    validation_data=val_ds, 
                    callbacks=[early_stopping,lr_on_plateau])
Epoch 1/20
157/157 [==============================] - 47s 236ms/step - loss: 2331.0710 - accuracy: 0.0537 - mae: 44.4892 - mse: 2331.0710 - val_loss: 2080.5728 - val_accuracy: 0.0828 - val_mae: 41.6663 - val_mse: 2080.5728 - lr: 0.0010
Epoch 2/20
157/157 [==============================] - 36s 230ms/step - loss: 1527.6349 - accuracy: 0.0509 - mae: 34.3936 - mse: 1527.6349 - val_loss: 1342.0967 - val_accuracy: 0.0234 - val_mae: 31.8816 - val_mse: 1342.0967 - lr: 0.0010
Epoch 3/20
157/157 [==============================] - 35s 223ms/step - loss: 752.5648 - accuracy: 0.0156 - mae: 21.8043 - mse: 752.5648 - val_loss: 713.2049 - val_accuracy: 0.0359 - val_mae: 21.4781 - val_mse: 713.2049 - lr: 0.0010
Epoch 4/20
157/157 [==============================] - 36s 230ms/step - loss: 315.2984 - accuracy: 0.0290 - mae: 12.8013 - mse: 315.2984 - val_loss: 224.0136 - val_accuracy: 0.0312 - val_mae: 10.7055 - val_mse: 224.0136 - lr: 0.0010
Epoch 5/20
157/157 [==============================] - 35s 222ms/step - loss: 155.3076 - accuracy: 0.0619 - mae: 8.6692 - mse: 155.3076 - val_loss: 118.5531 - val_accuracy: 0.0547 - val_mae: 7.3846 - val_mse: 118.5531 - lr: 0.0010
Epoch 6/20
157/157 [==============================] - 35s 224ms/step - loss: 104.5487 - accuracy: 0.1534 - mae: 6.6954 - mse: 104.5487 - val_loss: 95.2197 - val_accuracy: 0.3016 - val_mae: 6.0651 - val_mse: 95.2197 - lr: 0.0010
Epoch 7/20
157/157 [==============================] - 35s 225ms/step - loss: 87.0340 - accuracy: 0.3460 - mae: 5.7469 - mse: 87.0340 - val_loss: 81.8354 - val_accuracy: 0.4766 - val_mae: 5.4792 - val_mse: 81.8354 - lr: 0.0010
Epoch 8/20
157/157 [==============================] - 36s 226ms/step - loss: 78.0451 - accuracy: 0.6149 - mae: 5.2339 - mse: 78.0451 - val_loss: 74.2922 - val_accuracy: 0.7094 - val_mae: 5.1259 - val_mse: 74.2922 - lr: 0.0010
Epoch 9/20
157/157 [==============================] - 35s 223ms/step - loss: 71.6554 - accuracy: 0.6928 - mae: 4.9463 - mse: 71.6554 - val_loss: 63.5832 - val_accuracy: 0.6734 - val_mae: 4.7148 - val_mse: 63.5832 - lr: 0.0010
Epoch 10/20
157/157 [==============================] - 35s 225ms/step - loss: 64.5646 - accuracy: 0.6892 - mae: 4.6438 - mse: 64.5646 - val_loss: 60.2858 - val_accuracy: 0.6766 - val_mae: 4.4950 - val_mse: 60.2858 - lr: 0.0010
Epoch 11/20
157/157 [==============================] - 36s 228ms/step - loss: 58.2672 - accuracy: 0.6944 - mae: 4.3711 - mse: 58.2672 - val_loss: 49.9047 - val_accuracy: 0.6797 - val_mae: 4.1954 - val_mse: 49.9047 - lr: 0.0010
Epoch 12/20
157/157 [==============================] - 36s 231ms/step - loss: 52.5858 - accuracy: 0.6943 - mae: 4.1293 - mse: 52.5858 - val_loss: 51.4359 - val_accuracy: 0.7219 - val_mae: 4.1679 - val_mse: 51.4359 - lr: 0.0010
Epoch 13/20
157/157 [==============================] - 37s 233ms/step - loss: 48.9068 - accuracy: 0.6927 - mae: 3.9787 - mse: 48.9068 - val_loss: 46.0818 - val_accuracy: 0.6922 - val_mae: 3.7915 - val_mse: 46.0818 - lr: 0.0010
Epoch 14/20
157/157 [==============================] - 35s 223ms/step - loss: 43.3562 - accuracy: 0.6949 - mae: 3.7269 - mse: 43.3562 - val_loss: 36.5814 - val_accuracy: 0.6594 - val_mae: 3.4196 - val_mse: 36.5814 - lr: 0.0010
Epoch 15/20
157/157 [==============================] - 36s 228ms/step - loss: 41.3437 - accuracy: 0.6873 - mae: 3.6517 - mse: 41.3437 - val_loss: 35.2932 - val_accuracy: 0.6750 - val_mae: 3.4177 - val_mse: 35.2932 - lr: 0.0010
Epoch 16/20
157/157 [==============================] - 38s 243ms/step - loss: 34.9885 - accuracy: 0.6814 - mae: 3.4053 - mse: 34.9885 - val_loss: 35.6461 - val_accuracy: 0.6750 - val_mae: 3.4199 - val_mse: 35.6461 - lr: 0.0010
Epoch 17/20
157/157 [==============================] - 36s 230ms/step - loss: 32.0750 - accuracy: 0.6788 - mae: 3.2870 - mse: 32.0750 - val_loss: 27.9889 - val_accuracy: 0.6766 - val_mae: 3.1883 - val_mse: 27.9889 - lr: 0.0010
Epoch 18/20
157/157 [==============================] - 35s 225ms/step - loss: 30.5303 - accuracy: 0.6784 - mae: 3.2136 - mse: 30.5303 - val_loss: 24.0346 - val_accuracy: 0.6687 - val_mae: 2.8760 - val_mse: 24.0346 - lr: 0.0010
Epoch 19/20
157/157 [==============================] - 35s 224ms/step - loss: 27.1651 - accuracy: 0.6802 - mae: 3.0853 - mse: 27.1651 - val_loss: 20.1993 - val_accuracy: 0.7000 - val_mae: 2.8337 - val_mse: 20.1993 - lr: 0.0010
Epoch 20/20
157/157 [==============================] - 37s 234ms/step - loss: 25.6911 - accuracy: 0.6714 - mae: 3.0308 - mse: 25.6911 - val_loss: 21.2234 - val_accuracy: 0.7031 - val_mae: 2.7921 - val_mse: 21.2234 - lr: 0.0010
In [45]:
test.head()
Out[45]:
ImageId Image
0 1 182 183 182 182 180 180 176 169 156 137 124 10...
1 2 76 87 81 72 65 59 64 76 69 42 31 38 49 58 58 4...
2 3 177 176 174 170 169 169 168 166 166 166 161 14...
3 4 176 174 174 175 174 174 176 176 175 171 165 15...
4 5 50 47 44 101 144 149 120 58 48 42 35 35 37 39 ...
In [46]:
# to processes the images from the test dataset to a format suitable for the input of the facial keypoint detection model.
def get_images(test):   
    image = []
    for i in range(0, len(test)):
        img = test['Image'][i].split(' ')
        img = ['0' if x=='' else x for x in img]
        image.append(img)

    image_list = np.array(image, dtype = 'float')
    test_images = image_list.reshape(-1, 96, 96, 1)
    
    return test_images
In [47]:
test_images = get_images(test)
test_images = tf.keras.applications.mobilenet_v2.preprocess_input(test_images)
test_ds = tf.data.Dataset.from_tensor_slices((test_images)).batch(64)

Predictions¶

In [48]:
test_preds = model.predict(test_ds)
28/28 [==============================] - 8s 184ms/step
In [49]:
print('Shape of test predictions', test_preds.shape)
Shape of test predictions (1783, 30)
In [50]:
display_images(test_images[0],test_preds[0])
In [51]:
visualize_data(test_images,test_preds)
In [52]:
visualize_data(test_images,test_preds) 

CNN¶

image.png

CONV: Convolutional kernel layer

RELU: Activation function

POOL: Dimension reduction layer

FC: Fully connection layer

Convolutional kernel¶

Feature extraction

image.png

image.png

Activation function¶

to introduce non-linearity
to make network learn more complex representations

Rectified linear unit,ReLU¶

image-2.png

Pooling layer¶

Dimensionality reduction
Makes network more computationally efficient
Prevent Overfitting

image.png

image.png

Repeat 1-3 to create deep network¶

Fully connected layer¶

Make predictions about the images using features learned by the Covolutional layers

image.png

In [53]:
def data_loader():
    
    # Load dataset file
    data_frame = pd.read_csv('training.csv')
    
    data_frame['Image'] = data_frame['Image'].apply(lambda i: np.fromstring(i, sep=' '))
    data_frame = data_frame.dropna()  # Get only the data with 15 keypoints
   
    # Extract Images pixel values
    imgs_array = np.vstack(data_frame['Image'].values)/ 255.0
    imgs_array = imgs_array.astype(np.float32)    # Normalize, target values to (0, 1)
    imgs_array = imgs_array.reshape(-1, 96, 96, 1)
        
    # Extract labels (key point cords)
    labels_array = data_frame[data_frame.columns[:-1]].values
    labels_array = (labels_array - 48) / 48    # Normalize, traget cordinates to (-1, 1)
    labels_array = labels_array.astype(np.float32) 
    
    # shuffle the train data
# imgs_array, labels_array = shuffle(imgs_array, labels_array, random_state=9)  
    
    return imgs_array, labels_array
In [54]:
# # to check/verify data
# imgs, labels = data_loader()
# print(imgs.shape)
# print(labels.shape)

# n=0
# labels[n] = (labels[n]*48)+48
# image = np.squeeze(imgs[n])
# plt.imshow(image, cmap='gray')
# plt.plot(labels[n][::2], labels[n][1::2], 'ro')
# plt.show()

Building CNN model¶

In [55]:
# Main model
def the_model():
    model = Sequential()
    
    model.add(Conv2D(16, (3,3), padding='same', activation='relu', input_shape=X_train.shape[1:])) # Input shape: (96, 96, 1)
    model.add(MaxPooling2D(pool_size=2))
    
    model.add(Conv2D(32, (3,3), padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    
    model.add(Conv2D(64, (3,3), padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    
    model.add(Conv2D(128, (3,3), padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    
    model.add(Conv2D(256, (3,3), padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    
    # Convert all values to 1D array
    model.add(Flatten())
    
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(0.2))

    model.add(Dense(30))
    
    return model
In [56]:
X_train, y_train = data_loader()
print("Training datapoint shape: X_train.shape:{}".format(X_train.shape))
print("Training labels shape: y_train.shape:{}".format(y_train.shape))
Training datapoint shape: X_train.shape:(2140, 96, 96, 1)
Training labels shape: y_train.shape:(2140, 30)
In [57]:
epochs = 60
batch_size = 64

model = the_model()
hist = History()

checkpointer = ModelCheckpoint(filepath='checkpoint1.hdf5', 
                               verbose=1, save_best_only=True)
In [58]:
# Compile Model
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])

Model fitting¶

In [59]:
model_fit = model.fit(X_train, y_train, validation_split=0.2, epochs=epochs, batch_size=batch_size, callbacks=[checkpointer, hist], verbose=1)

model.save('model1.h5')
Epoch 1/60
27/27 [==============================] - ETA: 0s - loss: 0.0266 - accuracy: 0.5164
Epoch 1: val_loss improved from inf to 0.01006, saving model to checkpoint1.hdf5
27/27 [==============================] - 10s 334ms/step - loss: 0.0266 - accuracy: 0.5164 - val_loss: 0.0101 - val_accuracy: 0.4322
Epoch 2/60
27/27 [==============================] - ETA: 0s - loss: 0.0077 - accuracy: 0.6893
Epoch 2: val_loss improved from 0.01006 to 0.00765, saving model to checkpoint1.hdf5
27/27 [==============================] - 9s 325ms/step - loss: 0.0077 - accuracy: 0.6893 - val_loss: 0.0077 - val_accuracy: 0.4322
Epoch 3/60
27/27 [==============================] - ETA: 0s - loss: 0.0058 - accuracy: 0.7179
Epoch 3: val_loss did not improve from 0.00765
27/27 [==============================] - 8s 305ms/step - loss: 0.0058 - accuracy: 0.7179 - val_loss: 0.0078 - val_accuracy: 0.4322
Epoch 4/60
27/27 [==============================] - ETA: 0s - loss: 0.0053 - accuracy: 0.7290
Epoch 4: val_loss did not improve from 0.00765
27/27 [==============================] - 8s 312ms/step - loss: 0.0053 - accuracy: 0.7290 - val_loss: 0.0079 - val_accuracy: 0.4322
Epoch 5/60
27/27 [==============================] - ETA: 0s - loss: 0.0049 - accuracy: 0.7371
Epoch 5: val_loss did not improve from 0.00765
27/27 [==============================] - 8s 312ms/step - loss: 0.0049 - accuracy: 0.7371 - val_loss: 0.0077 - val_accuracy: 0.4322
Epoch 6/60
27/27 [==============================] - ETA: 0s - loss: 0.0047 - accuracy: 0.7430
Epoch 6: val_loss improved from 0.00765 to 0.00756, saving model to checkpoint1.hdf5
27/27 [==============================] - 9s 330ms/step - loss: 0.0047 - accuracy: 0.7430 - val_loss: 0.0076 - val_accuracy: 0.4322
Epoch 7/60
27/27 [==============================] - ETA: 0s - loss: 0.0044 - accuracy: 0.7436
Epoch 7: val_loss improved from 0.00756 to 0.00738, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 317ms/step - loss: 0.0044 - accuracy: 0.7436 - val_loss: 0.0074 - val_accuracy: 0.4322
Epoch 8/60
27/27 [==============================] - ETA: 0s - loss: 0.0040 - accuracy: 0.7418
Epoch 8: val_loss did not improve from 0.00738
27/27 [==============================] - 8s 290ms/step - loss: 0.0040 - accuracy: 0.7418 - val_loss: 0.0074 - val_accuracy: 0.4322
Epoch 9/60
27/27 [==============================] - ETA: 0s - loss: 0.0038 - accuracy: 0.7617
Epoch 9: val_loss improved from 0.00738 to 0.00683, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 294ms/step - loss: 0.0038 - accuracy: 0.7617 - val_loss: 0.0068 - val_accuracy: 0.4299
Epoch 10/60
27/27 [==============================] - ETA: 0s - loss: 0.0033 - accuracy: 0.7582
Epoch 10: val_loss improved from 0.00683 to 0.00654, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 302ms/step - loss: 0.0033 - accuracy: 0.7582 - val_loss: 0.0065 - val_accuracy: 0.4322
Epoch 11/60
27/27 [==============================] - ETA: 0s - loss: 0.0029 - accuracy: 0.7558
Epoch 11: val_loss improved from 0.00654 to 0.00606, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 293ms/step - loss: 0.0029 - accuracy: 0.7558 - val_loss: 0.0061 - val_accuracy: 0.4369
Epoch 12/60
27/27 [==============================] - ETA: 0s - loss: 0.0027 - accuracy: 0.7605
Epoch 12: val_loss improved from 0.00606 to 0.00595, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 307ms/step - loss: 0.0027 - accuracy: 0.7605 - val_loss: 0.0059 - val_accuracy: 0.4369
Epoch 13/60
27/27 [==============================] - ETA: 0s - loss: 0.0026 - accuracy: 0.7704
Epoch 13: val_loss improved from 0.00595 to 0.00585, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 305ms/step - loss: 0.0026 - accuracy: 0.7704 - val_loss: 0.0059 - val_accuracy: 0.4416
Epoch 14/60
27/27 [==============================] - ETA: 0s - loss: 0.0024 - accuracy: 0.8002
Epoch 14: val_loss improved from 0.00585 to 0.00499, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 298ms/step - loss: 0.0024 - accuracy: 0.8002 - val_loss: 0.0050 - val_accuracy: 0.4486
Epoch 15/60
27/27 [==============================] - ETA: 0s - loss: 0.0022 - accuracy: 0.7845
Epoch 15: val_loss improved from 0.00499 to 0.00491, saving model to checkpoint1.hdf5
27/27 [==============================] - 9s 333ms/step - loss: 0.0022 - accuracy: 0.7845 - val_loss: 0.0049 - val_accuracy: 0.4533
Epoch 16/60
27/27 [==============================] - ETA: 0s - loss: 0.0022 - accuracy: 0.7886
Epoch 16: val_loss improved from 0.00491 to 0.00479, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 285ms/step - loss: 0.0022 - accuracy: 0.7886 - val_loss: 0.0048 - val_accuracy: 0.4673
Epoch 17/60
27/27 [==============================] - ETA: 0s - loss: 0.0019 - accuracy: 0.8002
Epoch 17: val_loss improved from 0.00479 to 0.00441, saving model to checkpoint1.hdf5
27/27 [==============================] - 9s 318ms/step - loss: 0.0019 - accuracy: 0.8002 - val_loss: 0.0044 - val_accuracy: 0.4720
Epoch 18/60
27/27 [==============================] - ETA: 0s - loss: 0.0019 - accuracy: 0.8178
Epoch 18: val_loss improved from 0.00441 to 0.00439, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 311ms/step - loss: 0.0019 - accuracy: 0.8178 - val_loss: 0.0044 - val_accuracy: 0.4790
Epoch 19/60
27/27 [==============================] - ETA: 0s - loss: 0.0018 - accuracy: 0.8037
Epoch 19: val_loss improved from 0.00439 to 0.00402, saving model to checkpoint1.hdf5
27/27 [==============================] - 9s 316ms/step - loss: 0.0018 - accuracy: 0.8037 - val_loss: 0.0040 - val_accuracy: 0.4860
Epoch 20/60
27/27 [==============================] - ETA: 0s - loss: 0.0017 - accuracy: 0.8055
Epoch 20: val_loss did not improve from 0.00402
27/27 [==============================] - 8s 297ms/step - loss: 0.0017 - accuracy: 0.8055 - val_loss: 0.0041 - val_accuracy: 0.4790
Epoch 21/60
27/27 [==============================] - ETA: 0s - loss: 0.0016 - accuracy: 0.8189
Epoch 21: val_loss improved from 0.00402 to 0.00399, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 305ms/step - loss: 0.0016 - accuracy: 0.8189 - val_loss: 0.0040 - val_accuracy: 0.5257
Epoch 22/60
27/27 [==============================] - ETA: 0s - loss: 0.0016 - accuracy: 0.8236
Epoch 22: val_loss improved from 0.00399 to 0.00389, saving model to checkpoint1.hdf5
27/27 [==============================] - 9s 327ms/step - loss: 0.0016 - accuracy: 0.8236 - val_loss: 0.0039 - val_accuracy: 0.4907
Epoch 23/60
27/27 [==============================] - ETA: 0s - loss: 0.0015 - accuracy: 0.8318
Epoch 23: val_loss improved from 0.00389 to 0.00384, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 307ms/step - loss: 0.0015 - accuracy: 0.8318 - val_loss: 0.0038 - val_accuracy: 0.4883
Epoch 24/60
27/27 [==============================] - ETA: 0s - loss: 0.0014 - accuracy: 0.8341
Epoch 24: val_loss improved from 0.00384 to 0.00382, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 304ms/step - loss: 0.0014 - accuracy: 0.8341 - val_loss: 0.0038 - val_accuracy: 0.5280
Epoch 25/60
27/27 [==============================] - ETA: 0s - loss: 0.0015 - accuracy: 0.8405
Epoch 25: val_loss did not improve from 0.00382
27/27 [==============================] - 8s 298ms/step - loss: 0.0015 - accuracy: 0.8405 - val_loss: 0.0039 - val_accuracy: 0.5023
Epoch 26/60
27/27 [==============================] - ETA: 0s - loss: 0.0014 - accuracy: 0.8359
Epoch 26: val_loss improved from 0.00382 to 0.00363, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 308ms/step - loss: 0.0014 - accuracy: 0.8359 - val_loss: 0.0036 - val_accuracy: 0.4977
Epoch 27/60
27/27 [==============================] - ETA: 0s - loss: 0.0014 - accuracy: 0.8435
Epoch 27: val_loss did not improve from 0.00363
27/27 [==============================] - 8s 309ms/step - loss: 0.0014 - accuracy: 0.8435 - val_loss: 0.0039 - val_accuracy: 0.5070
Epoch 28/60
27/27 [==============================] - ETA: 0s - loss: 0.0014 - accuracy: 0.8353
Epoch 28: val_loss did not improve from 0.00363
27/27 [==============================] - 8s 306ms/step - loss: 0.0014 - accuracy: 0.8353 - val_loss: 0.0039 - val_accuracy: 0.5304
Epoch 29/60
27/27 [==============================] - ETA: 0s - loss: 0.0013 - accuracy: 0.8341
Epoch 29: val_loss improved from 0.00363 to 0.00356, saving model to checkpoint1.hdf5
27/27 [==============================] - 9s 318ms/step - loss: 0.0013 - accuracy: 0.8341 - val_loss: 0.0036 - val_accuracy: 0.5257
Epoch 30/60
27/27 [==============================] - ETA: 0s - loss: 0.0012 - accuracy: 0.8528
Epoch 30: val_loss improved from 0.00356 to 0.00351, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 301ms/step - loss: 0.0012 - accuracy: 0.8528 - val_loss: 0.0035 - val_accuracy: 0.5070
Epoch 31/60
27/27 [==============================] - ETA: 0s - loss: 0.0012 - accuracy: 0.8423
Epoch 31: val_loss did not improve from 0.00351
27/27 [==============================] - 8s 299ms/step - loss: 0.0012 - accuracy: 0.8423 - val_loss: 0.0036 - val_accuracy: 0.5023
Epoch 32/60
27/27 [==============================] - ETA: 0s - loss: 0.0012 - accuracy: 0.8511
Epoch 32: val_loss did not improve from 0.00351
27/27 [==============================] - 8s 309ms/step - loss: 0.0012 - accuracy: 0.8511 - val_loss: 0.0036 - val_accuracy: 0.4930
Epoch 33/60
27/27 [==============================] - ETA: 0s - loss: 0.0013 - accuracy: 0.8475
Epoch 33: val_loss did not improve from 0.00351
27/27 [==============================] - 8s 308ms/step - loss: 0.0013 - accuracy: 0.8475 - val_loss: 0.0036 - val_accuracy: 0.5047
Epoch 34/60
27/27 [==============================] - ETA: 0s - loss: 0.0011 - accuracy: 0.8458
Epoch 34: val_loss improved from 0.00351 to 0.00349, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 302ms/step - loss: 0.0011 - accuracy: 0.8458 - val_loss: 0.0035 - val_accuracy: 0.5397
Epoch 35/60
27/27 [==============================] - ETA: 0s - loss: 0.0011 - accuracy: 0.8563
Epoch 35: val_loss improved from 0.00349 to 0.00337, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 287ms/step - loss: 0.0011 - accuracy: 0.8563 - val_loss: 0.0034 - val_accuracy: 0.5187
Epoch 36/60
27/27 [==============================] - ETA: 0s - loss: 0.0011 - accuracy: 0.8516
Epoch 36: val_loss did not improve from 0.00337
27/27 [==============================] - 8s 298ms/step - loss: 0.0011 - accuracy: 0.8516 - val_loss: 0.0035 - val_accuracy: 0.5257
Epoch 37/60
27/27 [==============================] - ETA: 0s - loss: 0.0011 - accuracy: 0.8715
Epoch 37: val_loss did not improve from 0.00337
27/27 [==============================] - 8s 308ms/step - loss: 0.0011 - accuracy: 0.8715 - val_loss: 0.0039 - val_accuracy: 0.5421
Epoch 38/60
27/27 [==============================] - ETA: 0s - loss: 0.0012 - accuracy: 0.8569
Epoch 38: val_loss did not improve from 0.00337
27/27 [==============================] - 9s 317ms/step - loss: 0.0012 - accuracy: 0.8569 - val_loss: 0.0034 - val_accuracy: 0.4930
Epoch 39/60
27/27 [==============================] - ETA: 0s - loss: 0.0010 - accuracy: 0.8557
Epoch 39: val_loss did not improve from 0.00337
27/27 [==============================] - 8s 291ms/step - loss: 0.0010 - accuracy: 0.8557 - val_loss: 0.0034 - val_accuracy: 0.5257
Epoch 40/60
27/27 [==============================] - ETA: 0s - loss: 9.9151e-04 - accuracy: 0.8528
Epoch 40: val_loss improved from 0.00337 to 0.00333, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 304ms/step - loss: 9.9151e-04 - accuracy: 0.8528 - val_loss: 0.0033 - val_accuracy: 0.5304
Epoch 41/60
27/27 [==============================] - ETA: 0s - loss: 9.7985e-04 - accuracy: 0.8610
Epoch 41: val_loss did not improve from 0.00333
27/27 [==============================] - 9s 317ms/step - loss: 9.7985e-04 - accuracy: 0.8610 - val_loss: 0.0034 - val_accuracy: 0.5514
Epoch 42/60
27/27 [==============================] - ETA: 0s - loss: 9.7914e-04 - accuracy: 0.8639
Epoch 42: val_loss did not improve from 0.00333
27/27 [==============================] - 8s 287ms/step - loss: 9.7914e-04 - accuracy: 0.8639 - val_loss: 0.0035 - val_accuracy: 0.5234
Epoch 43/60
27/27 [==============================] - ETA: 0s - loss: 9.5562e-04 - accuracy: 0.8651
Epoch 43: val_loss did not improve from 0.00333
27/27 [==============================] - 8s 296ms/step - loss: 9.5562e-04 - accuracy: 0.8651 - val_loss: 0.0034 - val_accuracy: 0.5164
Epoch 44/60
27/27 [==============================] - ETA: 0s - loss: 9.6482e-04 - accuracy: 0.8516
Epoch 44: val_loss did not improve from 0.00333
27/27 [==============================] - 9s 325ms/step - loss: 9.6482e-04 - accuracy: 0.8516 - val_loss: 0.0034 - val_accuracy: 0.5491
Epoch 45/60
27/27 [==============================] - ETA: 0s - loss: 9.3110e-04 - accuracy: 0.8674
Epoch 45: val_loss improved from 0.00333 to 0.00325, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 315ms/step - loss: 9.3110e-04 - accuracy: 0.8674 - val_loss: 0.0033 - val_accuracy: 0.5304
Epoch 46/60
27/27 [==============================] - ETA: 0s - loss: 9.2465e-04 - accuracy: 0.8762
Epoch 46: val_loss did not improve from 0.00325
27/27 [==============================] - 9s 321ms/step - loss: 9.2465e-04 - accuracy: 0.8762 - val_loss: 0.0034 - val_accuracy: 0.5748
Epoch 47/60
27/27 [==============================] - ETA: 0s - loss: 9.1168e-04 - accuracy: 0.8674
Epoch 47: val_loss did not improve from 0.00325
27/27 [==============================] - 8s 310ms/step - loss: 9.1168e-04 - accuracy: 0.8674 - val_loss: 0.0033 - val_accuracy: 0.5421
Epoch 48/60
27/27 [==============================] - ETA: 0s - loss: 8.7742e-04 - accuracy: 0.8674
Epoch 48: val_loss did not improve from 0.00325
27/27 [==============================] - 8s 304ms/step - loss: 8.7742e-04 - accuracy: 0.8674 - val_loss: 0.0033 - val_accuracy: 0.5537
Epoch 49/60
27/27 [==============================] - ETA: 0s - loss: 9.0614e-04 - accuracy: 0.8756
Epoch 49: val_loss did not improve from 0.00325
27/27 [==============================] - 8s 308ms/step - loss: 9.0614e-04 - accuracy: 0.8756 - val_loss: 0.0033 - val_accuracy: 0.5187
Epoch 50/60
27/27 [==============================] - ETA: 0s - loss: 8.5400e-04 - accuracy: 0.8703
Epoch 50: val_loss improved from 0.00325 to 0.00317, saving model to checkpoint1.hdf5
27/27 [==============================] - 9s 319ms/step - loss: 8.5400e-04 - accuracy: 0.8703 - val_loss: 0.0032 - val_accuracy: 0.5350
Epoch 51/60
27/27 [==============================] - ETA: 0s - loss: 8.2141e-04 - accuracy: 0.8785
Epoch 51: val_loss did not improve from 0.00317
27/27 [==============================] - 8s 309ms/step - loss: 8.2141e-04 - accuracy: 0.8785 - val_loss: 0.0034 - val_accuracy: 0.5794
Epoch 52/60
27/27 [==============================] - ETA: 0s - loss: 8.6456e-04 - accuracy: 0.8703
Epoch 52: val_loss did not improve from 0.00317
27/27 [==============================] - 8s 303ms/step - loss: 8.6456e-04 - accuracy: 0.8703 - val_loss: 0.0036 - val_accuracy: 0.5607
Epoch 53/60
27/27 [==============================] - ETA: 0s - loss: 9.1109e-04 - accuracy: 0.8732
Epoch 53: val_loss did not improve from 0.00317
27/27 [==============================] - 8s 298ms/step - loss: 9.1109e-04 - accuracy: 0.8732 - val_loss: 0.0034 - val_accuracy: 0.5514
Epoch 54/60
27/27 [==============================] - ETA: 0s - loss: 7.9785e-04 - accuracy: 0.8855
Epoch 54: val_loss did not improve from 0.00317
27/27 [==============================] - 8s 301ms/step - loss: 7.9785e-04 - accuracy: 0.8855 - val_loss: 0.0033 - val_accuracy: 0.5491
Epoch 55/60
27/27 [==============================] - ETA: 0s - loss: 7.7316e-04 - accuracy: 0.8773
Epoch 55: val_loss did not improve from 0.00317
27/27 [==============================] - 8s 291ms/step - loss: 7.7316e-04 - accuracy: 0.8773 - val_loss: 0.0032 - val_accuracy: 0.5841
Epoch 56/60
27/27 [==============================] - ETA: 0s - loss: 8.1521e-04 - accuracy: 0.8732
Epoch 56: val_loss did not improve from 0.00317
27/27 [==============================] - 8s 302ms/step - loss: 8.1521e-04 - accuracy: 0.8732 - val_loss: 0.0034 - val_accuracy: 0.5537
Epoch 57/60
27/27 [==============================] - ETA: 0s - loss: 8.2878e-04 - accuracy: 0.8797
Epoch 57: val_loss did not improve from 0.00317
27/27 [==============================] - 8s 280ms/step - loss: 8.2878e-04 - accuracy: 0.8797 - val_loss: 0.0035 - val_accuracy: 0.5187
Epoch 58/60
27/27 [==============================] - ETA: 0s - loss: 7.6416e-04 - accuracy: 0.8709
Epoch 58: val_loss did not improve from 0.00317
27/27 [==============================] - 8s 311ms/step - loss: 7.6416e-04 - accuracy: 0.8709 - val_loss: 0.0032 - val_accuracy: 0.5467
Epoch 59/60
27/27 [==============================] - ETA: 0s - loss: 7.5469e-04 - accuracy: 0.8738
Epoch 59: val_loss improved from 0.00317 to 0.00315, saving model to checkpoint1.hdf5
27/27 [==============================] - 8s 306ms/step - loss: 7.5469e-04 - accuracy: 0.8738 - val_loss: 0.0031 - val_accuracy: 0.5678
Epoch 60/60
27/27 [==============================] - ETA: 0s - loss: 7.5889e-04 - accuracy: 0.8843
Epoch 60: val_loss did not improve from 0.00315
27/27 [==============================] - 8s 281ms/step - loss: 7.5889e-04 - accuracy: 0.8843 - val_loss: 0.0032 - val_accuracy: 0.5467
In [60]:
# Plot train and validation loss
plt.plot(hist.history['loss'], label='Train Loss')
plt.plot(hist.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()

Test image file with the model¶

In [61]:
def detect_points(face_img):
    me  = np.array(face_img)/255
    x_test = np.expand_dims(me, axis=0)
    x_test = np.expand_dims(x_test, axis=3)

    y_test = model.predict(x_test)
    label_points = (np.squeeze(y_test)*48)+48 
    
    return label_points
In [62]:
cv2.CascadeClassifier
# Load haarcascade
face_cascade = cv2.CascadeClassifier('C:\\Users\\PRITI CHAUDHARY\\Python\\Lib\\site-packages\\cv2\\data\\haarcascade_frontalface_default.xml')
dimensions = (96, 96)
In [63]:
# Enter the path to the test image
img = cv2.imread('trio.jpeg')
default_img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
plt.imshow(default_img)
Out[63]:
<matplotlib.image.AxesImage at 0x20791aafbb0>
In [64]:
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(gray_img)
Out[64]:
<matplotlib.image.AxesImage at 0x20791a7fb50>
In [65]:
faces = face_cascade.detectMultiScale(gray_img, 1.3, 5)
faces_img = np.copy(gray_img)
plt.rcParams["axes.grid"] = False
In [66]:
all_x_cords = []
all_y_cords = []

for i, (x,y,w,h) in enumerate(faces):
    
    h += 10
    w += 10
    x -= 5
    y -= 5
    
    just_face = cv2.resize(gray_img[y:y+h,x:x+w], dimensions)
    cv2.rectangle(faces_img,(x,y),(x+w,y+h),(255,0,0),1)
    
    scale_val_x = w/96
    scale_val_y = h/96
    
    label_point = detect_points(just_face)
    all_x_cords.append((label_point[::2]*scale_val_x)+x)
    all_y_cords.append((label_point[1::2]*scale_val_y)+y)
   
   
    plt.imshow(just_face, cmap='gray')
    plt.plot(label_point[::2], label_point[1::2], 'ro', markersize=5)
    plt.show()
    
    
plt.imshow(default_img)    
plt.plot(all_x_cords, all_y_cords, 'wo',  markersize=1)
plt.show()
1/1 [==============================] - 0s 156ms/step
1/1 [==============================] - 0s 31ms/step
1/1 [==============================] - 0s 47ms/step
1/1 [==============================] - 0s 31ms/step

Thank you:)¶

In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]:
 
In [ ]: